Expose parallelism counters planned/executed in pg_stat_statements

  • asotolongo@gmail.com, 2022-07-21T22:26:58+00:00
    Hi all: Here's a patch that adds counters about planned/executed parallelism
    to pg_stat_statements, as a way to follow up on whether queries are being
    planned and executed with parallelism. This can help you understand whether
    you have a good or bad configuration, or whether your hardware is sufficient.
    We decided to store the number of times each query was planned with
    parallelism and the number of times it was executed with parallelism.

    Regards,
    Anthony
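    As a rough sketch, counters like the ones described here might be consumed as
    follows. The column names `parallelized_plans` and `parallelized_execs` are
    placeholders for illustration; the patch's actual names may differ.

    ```sql
    -- Which statements are being planned and executed with parallelism?
    -- (parallelized_plans / parallelized_execs are hypothetical column names.)
    SELECT query,
           plans,
           calls,
           parallelized_plans,   -- times a parallel plan was produced
           parallelized_execs    -- times execution actually used parallelism
    FROM pg_stat_statements
    ORDER BY parallelized_execs DESC
    LIMIT 10;
    ```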
    • pryzby@telsasoft.com, 2022-07-22T00:35:56+00:00
      On Thu, Jul 21, 2022 at 06:26:58PM -0400, Anthony Sotolongo wrote:
      > Hi all:
      > Here's a patch to add counters about planned/executed for parallelism to
      > pg_stat_statements, as a way to follow-up on if the queries are
      > planning/executing with parallelism, this can help to understand if you have
      > a good/bad configuration or if your hardware is enough

      +1, I was missing something like this before, but it didn't occur to me to use
      PSS: https://www.postgresql.org/message-id/20200310190142.GB29065@telsasoft.com

      > My hope is to answer to questions like these:
      >
      > . is query (ever? usually?) using parallel paths?
      > . is query usefully using parallel paths?
      > . what queries are my max_parallel_workers(_per_process) being used for?
      > . Are certain longrunning or frequently running queries which are using
      >   parallel paths using all max_parallel_workers and precluding other queries
      >   from using parallel query? Or, are semi-short queries sometimes precluding
      >   longrunning queries from using parallelism, when the long queries would
      >   better benefit?

      This patch is storing the number of times the query was planned/executed using
      parallelism, but not the number of workers. Would it make sense to instead
      store the *number* of workers launched/planned? Otherwise, it might be that a
      query is consistently planned to use a large number of workers, but then runs
      with few. I'm referring to the fields shown in "explain/analyze". (Then, the
      2nd field should be renamed to "launched").

      Workers Planned: 2
      Workers Launched: 2

      I don't think this is doing the right thing for prepared statements, like
      PQprepare()/PQexecPrepared(), or SQL: PREPARE p AS SELECT; EXECUTE p;

      Right now, the docs say that it shows the "number of times the statement was
      planned to use parallelism", but the planning counter is incremented during
      each execution. PSS already shows "calls" and "plans" separately. The
      documentation doesn't mention prepared statements as a reason why they
      wouldn't match, which seems like a deficiency.

      This currently doesn't count parallel workers used by utility statements,
      such as CREATE INDEX and VACUUM (see max_parallel_maintenance_workers). If
      that's not easy to do, mention that in the docs as a limitation.

      You should try to add some test to contrib/pg_stat_statements/sql, or add a
      parallelism test to an existing test. Note that the number of parallel
      workers launched isn't stable, so you can't test that part.

      You modified pgss_store() to take two booleans, but pass "NULL" instead of
      "false". Curiously, of all the compilers in cirrusci, only MSVC complained.

      "planed" is actually spelled "planned", with two enns.

      The patch has some leading/trailing whitespace (maybe shown by git log
      depending on your configuration).

      Please add this patch to the next commitfest.
      https://commitfest.postgresql.org/39/

      --
      Justin
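      The prepared-statement concern raised above can be illustrated with a short
      SQL sketch (the table name is a placeholder):

      ```sql
      -- Hypothetical illustration of the prepared-statement issue: one planning,
      -- many executions. A counter documented as "number of times the statement
      -- was planned to use parallelism" should therefore be tied to planning,
      -- not bumped once per execution.
      PREPARE p AS SELECT count(*) FROM big_table;  -- big_table is a placeholder
      EXECUTE p;
      EXECUTE p;
      EXECUTE p;
      -- Once a generic plan is cached, pg_stat_statements can report
      -- plans < calls for this entry; a planning-side parallelism counter that
      -- is incremented at execution time would diverge from "plans" in exactly
      -- the way the review describes.
      ```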
      • asotolongo@gmail.com, 2022-07-22T15:17:52+00:00
        On 21-07-22 20:35, Justin Pryzby wrote:
        > On Thu, Jul 21, 2022 at 06:26:58PM -0400, Anthony Sotolongo wrote:
        > [...]
        > +1, I was missing something like this before, but it didn't occur to me to
        > use PSS:

        First of all, thanks for reviewing the patch and for the comments.

        > This patch is storing the number of times the query was planned/executed
        > using parallelism, but not the number of workers. Would it make sense to
        > instead store the *number* of workers launched/planned? Otherwise, it might
        > be that a query is consistently planned to use a large number of workers,
        > but then runs with few. I'm referring to the fields shown in
        > "explain/analyze". (Then, the 2nd field should be renamed to "launched").
        >
        > Workers Planned: 2
        > Workers Launched: 2

        The main idea of the patch is to store the number of times the statements
        were planned and executed in parallel, not the number of workers used in the
        execution. Of course, what you mention can be helpful; we will review how it
        can be achieved.

        > I don't think this is doing the right thing for prepared statements, like
        > PQprepare()/PQexecPrepared(), or SQL: PREPARE p AS SELECT; EXECUTE p;
        >
        > Right now, the docs say that it shows the "number of times the statement
        > was planned to use parallelism", but the planning counter is incremented
        > during each execution. PSS already shows "calls" and "plans" separately.
        > The documentation doesn't mention prepared statements as a reason why they
        > wouldn't match, which seems like a deficiency.

        We will check it and see how to fix it.

        > This currently doesn't count parallel workers used by utility statements,
        > such as CREATE INDEX and VACUUM (see max_parallel_maintenance_workers). If
        > that's not easy to do, mention that in the docs as a limitation.

        We will update the documentation with information related to this comment.

        > [...]
        > The patch has some leading/trailing whitespace (maybe shown by git log
        > depending on your configuration).

        OK, we will fix it.

        > Please add this patch to the next commitfest.
        > https://commitfest.postgresql.org/39/
        • rjuju123@gmail.com, 2022-07-22T16:08:30+00:00
          Hi,

          On Fri, Jul 22, 2022 at 11:17:52AM -0400, Anthony Sotolongo wrote:
          > [...]
          > The main idea of the patch is to store the number of times the statements
          > were planned and executed in parallel, not the number of workers used in
          > the execution. Of course, what you mention can be helpful, it will be
          > given a review to see how it can be achieved

          I think you would need both pieces of information.

          With your current patch it only says whether the plan and execution had
          parallelism enabled, but not whether they could actually use parallelism at
          all. It gives some information, but it's not that useful on its own.

          Also, a cumulative number of workers isn't really useful if you don't know
          what fraction of the number of executions (or plannings) they refer to.

          That being said, I'm not sure how exactly the information about the number
          of workers can be exposed, as there might be multiple gathers per plan and
          AFAIK they can run at different parts of the query execution. So in some
          cases having a total of 3 workers planned means that you ideally needed 3
          workers available at the same time, and in some other cases it might be
          only 2 or even 1.
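          If cumulative worker counts were ever exposed, the point above suggests
          they would only be readable as averages over the number of executions. A
          hedged sketch, with hypothetical column names:

          ```sql
          -- total_workers_planned / total_workers_launched are hypothetical
          -- columns: the raw totals say little on their own, so normalize them
          -- by the number of executions.
          SELECT query,
                 total_workers_planned::float8  / NULLIF(calls, 0)
                   AS avg_workers_planned,
                 total_workers_launched::float8 / NULLIF(calls, 0)
                   AS avg_workers_launched
          FROM pg_stat_statements
          WHERE calls > 0
          ORDER BY avg_workers_planned DESC;
          ```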
          • asotolongo@gmail.com, 2022-07-22T18:11:35+00:00
            On 22-07-22 12:08, Julien Rouhaud wrote:
            > [...]
            > With your current patch it only says if the plan and execution had
            > parallelism enabled, but not if it could actually use with parallelism
            > at all. It gives some information, but it's not that useful on its own.

            The original idea of this patch was to identify when some of the
            circumstances occur under which it is impossible to execute a parallel
            plan in parallel at execution time, as mentioned in the documentation
            at [1].

            For example: due to a different client configuration, the execution
            behavior can be different, and this can affect performance, as you can
            see in the execution plans below.

            From psql:

                ->  Gather Merge  (cost=779747.43..795700.62 rows=126492 width=40) (actual time=1109.515..1472.369 rows=267351 loops=1)
                      Output: t.entity_node_id, t.configuration_id, t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))
                      Workers Planned: 6
                      Workers Launched: 6
                      ->  Partial GroupAggregate  (cost=778747.33..779327.09 rows=21082 width=40) (actual time=889.129..974.028 rows=38193 loops=7)

            From jdbc (from dbeaver):

                ->  Gather Merge  (cost=779747.43..795700.62 rows=126492 width=40) (actual time=4383.576..4385.856 rows=398 loops=1)
                      Output: t.entity_node_id, t.configuration_id, t.stream_def_id, t.run_type_id, t.state_datetime, (PARTIAL count(1))
                      Workers Planned: 6
                      Workers Launched: 0
                      ->  Partial GroupAggregate  (cost=778747.33..779327.09 rows=21082 width=40) (actual time=4383.574..4385.814 rows=398 loops=1)

            This example was also discussed at this thread [2].

            With these PSS counters it will be easy to identify when some of these
            causes are happening.

            [1] https://www.postgresql.org/docs/current/when-can-parallel-query-be-used.html
            [2] https://www.postgresql.org/message-id/flat/32277_1555482629_5CB6C805_32277_8_1_A971FB43DFBC3D4C859ACB3316C9FF4632D98B37%40OPEXCAUBM42.corporate.adroot.infra.ftgroup

            > Also, a cumulated number of workers isn't really useful if you don't
            > know what fraction of the number of executions (or planning) they refer
            > to.

            We will try to investigate how to do this.

            > That being said, I'm not sure how exactly the information about the
            > number of workers can be exposed, as there might be multiple gathers
            > per plan and AKAIK they can run at different part of the query
            > execution. [...]
            • rjuju123@gmail.com, 2022-07-23T04:03:34+00:00
              Hi,

              On Fri, Jul 22, 2022 at 02:11:35PM -0400, Anthony Sotolongo wrote:
              > [...]
              > With these PSS counters will be easily identified when some of these
              > causes are happening.

              I agree it can be hard to identify, but I don't think that your
              proposed approach is enough to be able to do so. There's no guarantee
              of an exact 1:1 mapping between planning and execution, so you could
              totally see the same value for parallel_planned and parallel_exec and
              still have the dbeaver behavior happening.

              If you want to be able to distinguish "plan was parallel but execution
              was forced to disable it" from "plan wasn't parallel, so neither was
              the execution", you need some specific counters for both situations.
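              This distinction could translate into a query along the following
              lines, assuming the patch grew separate counters; the names here are
              illustrative, not taken from any posted patch version.

              ```sql
              -- Hypothetical counters: parallelized_plans (a parallel plan was
              -- chosen) and parallel_execs_degraded (a parallel plan was forced
              -- to run serially). Counting degradation directly avoids comparing
              -- parallel_planned with parallel_exec, which can match even while
              -- the dbeaver-style degradation is happening.
              SELECT query, parallelized_plans, parallel_execs_degraded
              FROM pg_stat_statements
              WHERE parallel_execs_degraded > 0;
              ```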
              • asotolongo@gmail.com, 2022-07-25T19:19:22+00:00
                On 23-07-22 00:03, Julien Rouhaud wrote:
                > [...]
                > If you want to be able to distinguish "plan was parallel but
                > execution was forced to disable it" from "plan wasn't parallel, so
                > was the execution", you need some specific counters for both
                > situations.

                Thanks for your time and feedback. Yes, we were missing some
                details, so we need to rethink some points to continue.
                • daymelbonne@gmail.com, 2022-07-29T13:36:44+00:00
                  Hi,

                  On Mon, Jul 25, 2022 at 14:19, Anthony Sotolongo
                  (asotolongo@gmail.com) wrote:
                  > [...]
                  > Thanks for your time and feedback, yes we were missing some
                  > details, so we need to rethink some points to continue

                  We have rewritten the patch and added the necessary columns to
                  record the number of times a parallel query plan was not executed
                  using parallelism. We are investigating how to add more
                  information related to the workers created by the
                  Gather/GatherMerge nodes, but it is not a trivial task.

                  Regards.
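                  The revised patch described above could then support a diagnostic
                  query along these lines; the column names are assumptions about
                  the new version, not confirmed anywhere in this thread.

                  ```sql
                  -- Fraction of executions where a parallel plan could not run in
                  -- parallel (parallel_plan_not_executed is a hypothetical name
                  -- for the new column).
                  SELECT query,
                         parallelized_execs,
                         parallel_plan_not_executed,
                         parallel_plan_not_executed::float8
                           / NULLIF(parallelized_execs
                                    + parallel_plan_not_executed, 0)
                           AS degraded_ratio
                  FROM pg_stat_statements
                  WHERE parallel_plan_not_executed > 0;
                  ```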